18 countries, including the US, have agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe.

Secure AI Development: US, Britain, and Others Sign Pact to Stop Misuse

The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior US official described as the first detailed international agreement to keep artificial intelligence safe from rogue actors and to require companies to create AI systems that are “secure by design.”

In a 20-page document released on Sunday, 18 countries agreed that companies designing and using AI must develop and deploy it in a way that keeps customers and the wider public safe from misuse.

The agreement is non-binding and contains mostly general recommendations, such as monitoring AI systems for abuse, protecting data from falsification, and auditing software vendors.

Still, Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, said it was important that so many countries put their names to the idea that AI systems must put safety first.

“This is the first time we’ve seen an affirmation that these capabilities shouldn’t just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”

The agreement is the latest in a series of initiatives – few of which have teeth – by governments around the world to shape the development of artificial intelligence, whose influence is increasingly felt in industry and society at large.

In addition to the United States and Britain, the 18 countries that signed the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.

The framework addresses questions about how to prevent AI technology from being hijacked by hackers, and includes recommendations such as releasing models only after proper security testing.

It does not address difficult questions about the appropriate use of AI or how the data that feeds these models is collected.

The rise of artificial intelligence has raised many concerns, including fears that it could be used to disrupt the democratic process, turbocharge fraud, and cause dramatic job losses.

Europe is ahead of the United States on AI regulation, with lawmakers there drafting AI rules. France, Germany and Italy also recently reached an agreement on how artificial intelligence should be regulated, supporting “mandatory self-regulation through codes of conduct” for so-called foundation models of AI, which are designed to produce a broad range of outputs.

The Biden administration has pushed lawmakers to regulate AI, but the polarized US Congress has made little progress in passing effective regulation.

With a new executive order in October, the White House sought to reduce AI risks to consumers, workers and minority groups while strengthening national security.
